Linearly constrained reconstruction of functions by kernels with applications to machine learning

Authors

  • Robert Schaback
  • J. Werner
Abstract

This paper investigates the approximation of multivariate functions from data via linear combinations of translates of a positive definite kernel from a reproducing kernel Hilbert space. If standard interpolation conditions are relaxed by Chebyshev-type constraints, one can minimize the norm of the approximant in the Hilbert space under these constraints. By standard arguments of optimization theory, the solutions take a simple form, based on the data related to the active constraints, called support vectors in the context of machine learning. The corresponding quadratic programming problems are investigated to some extent. Using monotonicity results concerning the Hilbert space norm, iterative techniques based on small quadratic subproblems on active sets are shown to be finite, even if they drop part of their previous information and even if they are used for infinite data, e.g. in the context of online learning. Numerical experiments confirm the theoretical results.
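The core construction of the abstract — minimizing the RKHS norm of a kernel expansion subject to Chebyshev-type (uniform-error) constraints — can be illustrated as a small quadratic program. The sketch below is a hypothetical illustration, not the authors' code: it uses a Gaussian kernel, synthetic sine data, and SciPy's SLSQP solver in place of a dedicated QP/active-set method; the tolerance `eps`, kernel width `gamma`, and sample count are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

# Approximant: s(x) = sum_j c_j * K(x, x_j).
# Objective: RKHS norm ||s||^2 = c^T K c.
# Chebyshev-type constraints: |s(x_i) - f_i| <= eps at every data point.

def gauss_kernel(X, Y, gamma=10.0):
    """Gaussian (positive definite) kernel matrix K[i, j] = exp(-gamma (x_i - y_j)^2)."""
    d = X[:, None] - Y[None, :]
    return np.exp(-gamma * d ** 2)

x = np.linspace(0.0, 1.0, 8)          # data sites
f = np.sin(2 * np.pi * x)             # data values (synthetic)
eps = 0.05                            # uniform tolerance

K = gauss_kernel(x, x)

def rkhs_norm_sq(c):
    return c @ K @ c

# Two one-sided inequality sets encode eps - |K c - f| >= 0.
cons = [
    {"type": "ineq", "fun": lambda c: eps - (K @ c - f)},
    {"type": "ineq", "fun": lambda c: eps + (K @ c - f)},
]

res = minimize(rkhs_norm_sq, np.zeros_like(f), constraints=cons, method="SLSQP")
residual = K @ res.x - f
print("max |residual|:", np.abs(residual).max())

# Active constraints play the role of support vectors: the solution is
# driven by the points where the residual sits on the +/- eps boundary.
active = np.abs(np.abs(residual) - eps) < 1e-4
print("number of active constraints:", active.sum())
```

The exact-interpolation case corresponds to `eps = 0`; relaxing it shrinks the RKHS norm of the solution and leaves only the active-constraint points contributing, which is the structure the paper's active-set iterations exploit.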


Related articles

Semi-supervised composite kernel learning using distance metric learning techniques

Distance metric has a key role in many machine learning and computer vision algorithms so that choosing an appropriate distance metric has a direct effect on the performance of such algorithms. Recently, distance metric learning using labeled data or other available supervisory information has become a very active research area in machine learning applications. Studies in this area have shown t...


Composite Kernel Optimization in Semi-Supervised Metric

Machine-learning solutions to classification, clustering and matching problems critically depend on the adopted metric, which in the past was selected heuristically. In the last decade, it has been demonstrated that an appropriate metric can be learnt from data, resulting in superior performance as compared with traditional metrics. This has recently stimulated a considerable interest in the to...


Forecasting the Tehran Stock Market by Machine Learning Methods Using a New Loss Function

Stock market forecasting has attracted so many researchers and investors that many studies have been done in this field. These studies have led to the development of many predictive methods, the most widely used of which are machine learning-based methods. In machine learning-based methods, loss function has a key role in determining the model weights. In this study a new loss function is ...


Learning Sequence Kernels

Kernel methods are used to tackle a variety of learning tasks including classification, regression, ranking, clustering, and dimensionality reduction. The appropriate choice of a kernel is often left to the user. But, poor selections may lead to a sub-optimal performance. Instead, sample points can be used to learn a kernel function appropriate for the task by selecting one out of a family of k...


A Novel Frequency Domain Linearly Constrained Minimum Variance Filter for Speech Enhancement

A reliable speech enhancement method is important for speech applications as a pre-processing step to improve their overall performance. In this paper, we propose a novel frequency domain method for single channel speech enhancement. Conventional frequency domain methods usually neglect the correlation between neighboring time-frequency components of the signals. In the proposed method, we take...



Journal:
  • Adv. Comput. Math.

Volume 25

Publication year: 2006